2024-08-09 09:16:52 · AIbase · 10.9k
OpenAI Indicates Its Latest GPT-4o Model Has a Risk Rating of 'Medium'
OpenAI recently published the GPT-4o System Card, detailing the safety measures and risk assessments carried out before the new model's launch. GPT-4o was officially released in May, and the assessment, which covered cybersecurity, biological threats, persuasion, and model autonomy, gave the model an overall risk rating of 'medium'. Researchers found that while text generated by GPT-4o could be persuasive in shifting readers' opinions, it did not surpass human-written content overall. The system card was released as OpenAI faced criticism from internal employees and state senators questioning its safety decision-making.
2024-01-04 17:01:43 · AIbase · 4.7k
Scientists Claim There Is a 5% Chance AI Could Lead to Human Extinction
AI researchers broadly see the development of superintelligent AI as carrying a small but non-negligible risk of human extinction. In the largest survey of AI researchers to date, about 58% of respondents put the chance of human extinction or other extremely bad AI-related outcomes at 5% or higher. Researchers disagree widely on the timeline for future AI milestones and remain uncertain about the social consequences AI may bring. The survey also highlights more immediate concerns, including AI-driven deepfakes, manipulation of public opinion, and weaponization.